
    Debugging Memory Issues In Embedded Linux: A Case Study

    Debugging denotes the process of detecting the root causes of unexpected observable behaviors in programs, such as a program crash, an unexpected output value, or an assertion violation. Debugging program errors is a difficult task and often takes a significant amount of time in the software development life cycle. In the context of embedded software, the probability of bugs is quite high: owing to requirements of low code size and limited resource consumption, embedded software typically dispenses with many sanity checks during development. This leads to a high chance of errors surfacing in the production code at run time. In this paper we propose a methodology for debugging errors in BusyBox, a de-facto standard for Linux in embedded systems. Our methodology works on top of Valgrind, a popular memory error detector, and Daikon, an invariant analyzer. We have experimented with two published errors in BusyBox and report our findings in this paper.
    Comment: In proceedings of IEEE TechSym 2011, 14-16 January 2011, IIT Kharagpur, India
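The methodology above combines a memory error detector with an invariant analyzer. Daikon's core idea can be sketched as follows: observe variable values at a program point across runs and retain only the candidate invariants that held in every observation. This is an illustrative toy, not Daikon's actual implementation; the candidate predicates are assumptions chosen for the example.

```python
# Illustrative sketch of Daikon's core idea (not its actual implementation):
# record variable values at a program point across runs, then keep only the
# candidate invariants that held in every observation.

def infer_invariants(observations):
    """observations: list of dicts mapping variable name -> observed value."""
    # Hypothetical candidate invariants over a single integer variable.
    candidates = {
        "nonneg": lambda v: v >= 0,
        "nonzero": lambda v: v != 0,
        "even": lambda v: v % 2 == 0,
    }
    invariants = {}
    for var in observations[0]:
        held = [name for name, pred in candidates.items()
                if all(pred(obs[var]) for obs in observations)]
        invariants[var] = held
    return invariants

# Example: values of `len` observed at a function exit across three runs.
runs = [{"len": 4}, {"len": 8}, {"len": 2}]
print(infer_invariants(runs))  # {'len': ['nonneg', 'nonzero', 'even']}
```

A violation of an inferred invariant in a failing run then points the debugger toward the root cause.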

    Set Augmented Finite Automata over Infinite Alphabets

    A data language is a set of finite words over an infinite alphabet. Data languages are used to express properties associated with data values (drawn from a countably infinite domain). In this paper, we introduce set augmented finite automata (SAFA), a new class of automata for expressing data languages. We investigate the decision problems, closure properties, and expressiveness of SAFA. We also study the deterministic variant of these automata.
    Comment: This is the full version of a paper with the same name accepted at DLT 2023. Beyond the full proofs, this version contains several new results: more closure properties, the universality problem, a comparison of expressiveness with register automata and class counter automata, and further results on deterministic SAFA
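The idea of augmenting finite control with a set of data values can be illustrated on a classic data language: the words in which every value is fresh. The sketch below is a toy reading of the concept, not the paper's formal SAFA definition; state names and the single-set model are assumptions for illustration.

```python
# Toy sketch of a set-augmented automaton: finite control plus one set of
# data values drawn from an infinite alphabet. This example accepts exactly
# the data words in which every value is fresh (all-distinct), a classic
# data language not recognizable by a plain finite automaton.

def accepts_all_distinct(word):
    state = "q0"          # finite control (a single accepting state here)
    stored = set()        # the set the automaton is augmented with
    for value in word:
        if value in stored:   # membership test on the set...
            return False      # ...drives the transition: repeat => reject
        stored.add(value)     # insert the fresh value and stay in q0
    return state == "q0"

print(accepts_all_distinct([5, 9, 1000000]))  # True
print(accepts_all_distinct([5, 9, 5]))        # False
```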

    A Framework for Automated Correctness Checking of Biochemical Protocol Realizations on Digital Microfluidic Biochips

    Recent advances in digital microfluidic (DMF) technologies offer a promising platform for a wide variety of biochemical applications, such as DNA analysis, automated drug discovery, and toxicity monitoring. For on-chip implementation of complex bioassays, automated synthesis tools have been developed to meet the design challenges. Currently, the synthesis tools pass through a number of complex design steps to realize a given biochemical protocol on a target DMF architecture; design errors can therefore arise during the synthesis steps. Before deploying a DMF biochip in a safety-critical system, it is necessary to ensure that the desired biochemical protocol has been correctly implemented, i.e., that the synthesized output (actuation sequences for the biochip) is free from any design or realization errors. We propose a symbolic constraint-based analysis framework for checking the correctness of a synthesized biochemical protocol with respect to the original design specification. The verification scheme based on this framework can detect several post-synthesis fluidic violations and realization errors in 2D-array-based or pin-constrained biochips as well as in cyber-physical systems. It further generates diagnostic feedback for error localization. We present experimental results on the polymerase chain reaction (PCR) and in-vitro multiplexed bioassays to demonstrate the proposed verification approach.
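One kind of post-synthesis fluidic violation is unintended droplet interference. A minimal sketch of such a check, assuming a typical spacing constraint (no two droplets in the same or adjacent cells at any time step) rather than the paper's exact constraint system, could look like this:

```python
# Minimal sketch of one post-synthesis check: at every time step, no two
# droplets may occupy the same or adjacent cells, a typical fluidic
# constraint to avoid unintended mixing. The constraint form is an
# assumption for illustration, not the paper's exact formulation.

def violates_spacing(positions):
    """positions: list, per time step, of (row, col) droplet locations."""
    for t, droplets in enumerate(positions):
        for i in range(len(droplets)):
            for j in range(i + 1, len(droplets)):
                (r1, c1), (r2, c2) = droplets[i], droplets[j]
                if abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1:
                    return t  # diagnostic feedback: first offending step
    return None               # no violation found

schedule = [[(0, 0), (0, 3)],   # step 0: droplets far apart, fine
            [(0, 1), (0, 2)]]   # step 1: adjacent cells -> violation
print(violates_spacing(schedule))  # 1
```

Returning the offending time step mirrors the framework's goal of producing diagnostic feedback for error localization.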

    Verifying Coalitions in 3-Party Systems


    The open family of temporal logics: annotating temporal operators with input constraints

    Assume-guarantee style verification of modules relies on appropriate modeling of the interaction of a module with its environment. Popular temporal logics such as Computation Tree Logic (CTL) and Linear Temporal Logic (LTL), which were originally defined for closed systems (Kripke structures), do not make any syntactic discrimination between input and output variables. As a result, these logics and their recent derivatives (such as SystemVerilog, Sugar, Forspec, etc.) permit the specification of properties that have semantic problems when interpreted over open systems or modules. These semantic problems are quite common in practice, but are computationally hard to detect within a given specification. In this article, we propose a new style for writing temporal specifications of open systems that helps the designer avoid most of these problems. In the proposed style, the basic temporal operators (such as next and until) are annotated with assume constraints over the input variables. We formalize this style through an extension of LTL, namely Open-LTL, and an extension of CTL with fairness, called Open-CTL. We show that this simple syntactic separation between the assume and the guarantee achieves the desired results. We show that the proposed style can be integrated with traditional symbolic model-checking techniques, and we present a complete tool for the verification of Verilog RTL modules in isolation.
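To make the annotated-operator idea concrete, the sketch below evaluates an input-annotated "next" over a finite trace. The trace semantics used here (the next-state obligation is only checked when the next input satisfies the assume constraint) and the variable names `req`/`grant` are assumptions for illustration, not the paper's Open-LTL definition.

```python
# Illustrative trace semantics for an input-annotated "next", assumed for
# this sketch: X[assume](phi) holds at step i if either the input at step
# i+1 violates the assume constraint, or phi holds at step i+1. This
# reading is an assumption for illustration, not the paper's definition.

def annotated_next(trace, i, assume, phi):
    """trace: list of dicts of variable values; assume/phi: predicates."""
    if i + 1 >= len(trace):
        return True                      # no next step: vacuously true
    nxt = trace[i + 1]
    return (not assume(nxt)) or phi(nxt)

# req is an input, grant an output (hypothetical names).
trace = [{"req": 1, "grant": 0},
         {"req": 1, "grant": 1},
         {"req": 0, "grant": 0}]
assume = lambda s: s["req"] == 1
phi    = lambda s: s["grant"] == 1
print(annotated_next(trace, 0, assume, phi))  # True: req held, grant held
print(annotated_next(trace, 1, assume, phi))  # True: req low, assume void
```

The point of the separation is visible in the second call: because the assume constraint on the input fails, the module incurs no obligation, matching the assume-guarantee reading.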

    QoS Constrained Large Scale Web Service Composition Using Abstraction Refinement


    Scheduling with Task Duplication for Application Offloading

    Computation offloading frameworks partition an application's execution between a cloud server and the mobile device to minimize its completion time on the mobile device. An important component of an offloading framework is the partitioning algorithm, which decides which tasks to execute on the mobile device and which on the cloud server. The partitioning algorithm schedules the tasks of a mobile application for execution either on the mobile device or on the cloud server to minimize the application finish time. Most offloading frameworks partition parallel applications using an optimization solver, which takes a lot of time. We show that by allowing duplicate execution of selected tasks on both the mobile device and the remote cloud server, a polynomial-time algorithm exists to determine a schedule that minimizes the completion time. We use simulation on both random data and traces to show the savings in both finish time and scheduling time over existing approaches. Our trace-driven simulation on benchmark applications shows that our algorithm reduces the scheduling time by a factor of 8 compared to a standard optimization solver while guaranteeing a minimum makespan.
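Why duplication can help is easy to see on a tiny example: a task A whose result is needed both by a task pinned to the mobile device and a task pinned to the cloud. The two-processor model, the transfer delay, and all the numbers below are assumptions chosen for illustration, not the paper's model or results.

```python
# Toy illustration of why duplicating a task can shorten the makespan.
# Task A feeds task B (must run on the mobile device) and task C (must run
# on the cloud). Crossing between device and cloud costs a transfer delay.
# The model and the numbers are assumptions for illustration.

def finish_time(run_a_on, exec_mobile, exec_cloud, transfer):
    """Makespan for one placement choice of task A."""
    if run_a_on == "both":                    # duplicate A on both sides
        b_done = exec_mobile["A"] + exec_mobile["B"]
        c_done = exec_cloud["A"] + exec_cloud["C"]
    elif run_a_on == "mobile":                # C must wait for a transfer
        b_done = exec_mobile["A"] + exec_mobile["B"]
        c_done = exec_mobile["A"] + transfer + exec_cloud["C"]
    else:                                     # A on cloud; B waits
        b_done = exec_cloud["A"] + transfer + exec_mobile["B"]
        c_done = exec_cloud["A"] + exec_cloud["C"]
    return max(b_done, c_done)

exec_mobile = {"A": 4, "B": 3}   # execution times on the mobile device
exec_cloud = {"A": 1, "C": 2}    # execution times on the cloud server
transfer = 5                     # device <-> cloud transfer delay
for choice in ("mobile", "cloud", "both"):
    print(choice, finish_time(choice, exec_mobile, exec_cloud, transfer))
```

With these numbers, running A only on the mobile device gives a makespan of 11, only on the cloud gives 9, and duplicating it on both gives 7: the duplicate removes the transfer from every critical path.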